
    Effects of Land Use in the Ohio River Basin on the Distribution of Coliform and Antibiotic Resistant Bacteria in the Ohio River

    Recent studies indicate that antibiotic-resistant bacteria can be useful as indicators of water quality (1, 2, 5, 7, 8, 9, 10). Studies in our laboratory have shown that fecal pollution did not fully explain the distribution or the frequency of antibiotic-resistant bacteria in the Ohio River (27, 28). It is therefore important to understand the factors that affect the distribution of antibiotic-resistant bacteria in aquatic habitats. The purpose of this study was to examine the correlations between land use, water quality, and the concentration of antibiotic-resistant bacteria in the Ohio River. Mid-channel water samples were collected at five-mile intervals in the Ohio River and all major tributaries. Total cultivable bacteria and selected antibiotic-resistant bacteria were cultivated on R2A agar. Antibiotic-resistant total coliforms and Escherichia coli were enumerated using Colilert® reagent (IDEXX Laboratories, Inc., Westbrook, ME) and Quanti-Tray®/2000. Land use features were obtained from the National Land Cover Database (NLCD) from the USGS website. The data were then imported into ArcGIS® (ESRI, Redlands, CA) and combined with the microbiological data to analyze the association between land use and microbial communities. CANOCO 4.5 was used to determine the spatial differences between sites. Linear regression models were used to determine trends between land use and individual microbial communities. The data suggested that residential, commercial, and, in some cases, wetland land use types have a significant and proportional relationship with bacterial abundance, whereas farming and forested areas have a significant but inverse relationship.
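    As a loose illustration of the trend analysis described above, the sketch below fits a simple linear regression between a land-use fraction and antibiotic-resistant bacterial counts. It is not the study's actual pipeline; the site values, the use of scipy, and the log-transformed counts are illustrative assumptions.

```python
# Minimal sketch (not the study's pipeline): test for a linear trend between a
# land-use fraction and antibiotic-resistant bacterial counts at sampling sites.
# The arrays below are hypothetical placeholders, not data from the study.
import numpy as np
from scipy.stats import linregress

# Fraction of residential land in each site's drainage area (hypothetical)
residential_fraction = np.array([0.05, 0.12, 0.20, 0.31, 0.44, 0.58])
# log10 MPN/100 mL of antibiotic-resistant E. coli at the same sites (hypothetical)
log_resistant_ecoli = np.array([1.1, 1.4, 1.9, 2.2, 2.8, 3.1])

fit = linregress(residential_fraction, log_resistant_ecoli)
print(f"slope={fit.slope:.2f}, r^2={fit.rvalue**2:.2f}, p={fit.pvalue:.3g}")
# A significant positive slope corresponds to the "proportional" relationship
# reported for residential/commercial land use; a negative slope to the inverse
# relationship reported for farmed and forested areas.
```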

    DiffusionSDF: Conditional Generative Modeling of Signed Distance Functions

    Probabilistic diffusion models have achieved state-of-the-art results for image synthesis, inpainting, and text-to-image tasks. However, they are still in the early stages of generating complex 3D shapes. This work proposes DiffusionSDF, a generative model for shape completion, single-view reconstruction, and reconstruction of real-scanned point clouds. We use neural signed distance functions (SDFs) as our 3D representation to parameterize the geometry of various signals (e.g., point clouds, 2D images) through neural networks. Neural SDFs are implicit functions, and diffusing them amounts to learning the reversal of their neural network weights, which we solve using a custom modulation module. Extensive experiments show that our method is capable of both realistic unconditional generation and conditional generation from partial inputs. This work expands the domain of diffusion models from learning 2D explicit representations to 3D implicit representations.
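    As a rough sketch of the conditional-diffusion idea described above, the snippet below runs one denoising-diffusion training step over per-shape latent "modulation" vectors, optionally conditioned on an embedding of a partial observation. The network shapes, the epsilon-prediction loss, and the conditioning encoder are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the general idea (not the authors' code): denoising diffusion
# over latent modulation vectors that parameterize per-shape neural SDFs,
# conditioned on an embedding of a partial observation (e.g., a partial point cloud).
import torch
import torch.nn as nn

latent_dim, cond_dim, T = 256, 128, 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                       # predicts the added noise
    nn.Linear(latent_dim + cond_dim + 1, 512), nn.SiLU(),
    nn.Linear(512, latent_dim),
)

def training_step(z0, cond):
    """z0: clean SDF modulation latents (B, latent_dim); cond: condition embedding."""
    B = z0.shape[0]
    t = torch.randint(0, T, (B,))
    a = alphas_cum[t].unsqueeze(1)
    noise = torch.randn_like(z0)
    zt = a.sqrt() * z0 + (1 - a).sqrt() * noise           # forward diffusion
    inp = torch.cat([zt, cond, t.unsqueeze(1) / T], dim=1)
    return nn.functional.mse_loss(denoiser(inp), noise)   # epsilon-prediction loss

loss = training_step(torch.randn(8, latent_dim), torch.randn(8, cond_dim))
```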

    Precision Enhancement of 3D Surfaces from Multiple Compressed Depth Maps

    In a texture-plus-depth representation of a 3D scene, depth maps from different camera viewpoints are typically lossily compressed via the classical transform coding / coefficient quantization paradigm. In this paper we propose to reduce the distortion of the decoded depth maps due to quantization. The key observation is that depth maps from different viewpoints constitute multiple descriptions (MD) of the same 3D scene. Considering the MDs jointly, we perform a POCS-like iterative procedure that projects a reconstructed signal from one depth map to the other and back, so that the converged depth maps have higher precision than the original quantized versions. Comment: this work was accepted as an ongoing-work paper at IEEE MMSP'201
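    The POCS-style refinement can be illustrated with a toy example: each quantized depth value constrains the true value to lie within its quantization bin (a convex set), and alternating projections onto the bins of the two descriptions converge to an estimate consistent with both. The sketch below omits the inter-view warp (it treats the two descriptions as already aligned), so it is only a schematic of the idea, not the paper's method.

```python
# Toy sketch of the POCS-style idea: two quantized versions of the same depth
# signal each define a convex constraint set (stay inside your quantization bin).
# Alternately projecting a running estimate onto both sets yields a reconstruction
# consistent with both descriptions. The inter-view warp is assumed identity.
import numpy as np

def project_to_bin(x, quantized, step):
    # Clip x into the quantization interval [q - step/2, q + step/2], elementwise.
    return np.clip(x, quantized - step / 2, quantized + step / 2)

rng = np.random.default_rng(0)
true_depth = rng.uniform(1.0, 5.0, size=32)          # hypothetical ground truth
step_a, step_b = 0.4, 0.3                            # two coders, different step sizes
q_a = step_a * np.round(true_depth / step_a)         # description A
q_b = step_b * np.round(true_depth / step_b)         # description B

est = q_a.copy()
for _ in range(50):                                  # alternating projections
    est = project_to_bin(est, q_a, step_a)
    est = project_to_bin(est, q_b, step_b)

print("mean error, description A alone:", np.abs(q_a - true_depth).mean())
print("mean error, joint POCS estimate:", np.abs(est - true_depth).mean())
```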

    [(Pyrrolidin-1-yl)carbothioylsulfanyl]methyl pyrrolidine-1-carbodithioate

    The title compound, C11H18N2S4, was unexpectedly obtained during studies on the reactivity of the complex tris(acac-κ2O,O′)gallium(III) (acac is acetylacetonate) with C4H8NCS2H in dichloromethane. The title compound shows two disordered pyrrolidine rings with major and minor occupancies of 0.546 (4) and 0.454 (4). Two (pyrrolidin-1-yl)carbothioylsulfanyl units are linked together through a methylene C atom, and weak C—H⋯S interactions are found.

    Thin On-Sensor Nanophotonic Array Cameras

    Today's commodity camera systems rely on compound optics to map light originating from the scene to positions on the sensor, where it is recorded as an image. To record images without optical aberrations, i.e., deviations from Gauss' linear model of optics, typical lens systems introduce increasingly complex stacks of optical elements, which are responsible for the height of existing commodity cameras. In this work, we investigate flat nanophotonic computational cameras as an alternative that employs an array of skewed lenslets and a learned reconstruction approach. The optical array is embedded on a metasurface that, at 700 nm height, is flat and sits on the sensor cover glass at 2.5 mm focal distance from the sensor. To tackle the highly chromatic response of a metasurface and design the array over the entire sensor, we propose a differentiable optimization method that continuously samples over the visible spectrum and factorizes the optical modulation for different incident fields into individual lenses. We reconstruct a megapixel image from our flat imager with a learned probabilistic reconstruction method that employs a generative diffusion model to sample an implicit prior. To tackle scene-dependent aberrations in broadband, we propose a method for acquiring paired captured training data in varying illumination conditions. We assess the proposed flat camera design in simulation and with an experimental prototype, validating that the method is capable of recovering images from diverse scenes in broadband with a single nanophotonic layer. Comment: 18 pages, 12 figures, to be published in ACM Transactions on Graphics.
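    As a toy illustration of the spectrum-sampling optimization mentioned above, the sketch below fits two scalar lens parameters against a simple diffractive chromatic-focal-shift model, sampling wavelengths continuously over the visible band at every gradient step. The scalar focal-shift model, the parameters, and the loss are illustrative assumptions, not the paper's differentiable metasurface model.

```python
# Toy sketch of spectrum-sampled differentiable optimization: diffractive optics show
# a strong chromatic focal shift, roughly f(lambda) = f0 * lambda0 / lambda. Each step
# samples wavelengths over the visible band and nudges the lens parameters to reduce
# defocus at the fixed 2.5 mm sensor distance. All values are illustrative.
import torch

target_distance_mm = 2.5
f0 = torch.tensor(2.5, requires_grad=True)           # nominal focal length (mm)
lambda0 = torch.tensor(550e-6, requires_grad=True)   # design wavelength (mm)
opt = torch.optim.Adam([f0, lambda0], lr=1e-3)

for step in range(2000):
    lam = torch.empty(64).uniform_(400e-6, 700e-6)    # sample the visible spectrum (mm)
    f_lam = f0 * lambda0 / lam                         # chromatic focal shift model
    loss = ((f_lam - target_distance_mm) ** 2).mean()  # defocus penalty over spectrum
    opt.zero_grad()
    loss.backward()
    opt.step()
```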

    SUBA: the Arabidopsis Subcellular Database

    Knowledge of protein localisation contributes towards our understanding of protein function and of biological inter-relationships. A variety of experimental methods are currently being used to produce localisation data that need to be made accessible in an integrated manner. Chimeric fluorescent fusion proteins have been used to define subcellular localisations, with at least 1100 related experiments completed in Arabidopsis. More recently, many studies have employed mass spectrometry to undertake proteomic surveys of subcellular components in Arabidopsis, yielding localisation information for ∼2600 proteins. Further protein localisation information may be obtained from other literature references to analysis of locations (AmiGO: ∼900 proteins), from Swiss-Prot annotations (∼2000 proteins), and from locations inferred from gene descriptions (∼2700 proteins). Additionally, an increasing volume of available software provides location prediction information for proteins based on amino acid sequence. We have undertaken to bring these various data sources together to build SUBA, a SUBcellular location database for Arabidopsis proteins. The localisation data in SUBA encompass 10 distinct subcellular locations and >6743 non-redundant proteins, and represent the proteins encoded in the transcripts responsible for 51% of Arabidopsis expressed sequence tags. The SUBA database provides a powerful means by which to assess protein subcellular localisation in Arabidopsis.
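    The data-integration idea can be sketched as pooling localisation calls per protein across evidence types and summarising the support for each location. The record format and example entries below are hypothetical and do not reflect SUBA's actual schema.

```python
# Hypothetical sketch of the integration idea (not SUBA's schema): pool localisation
# calls for each protein from multiple evidence types (GFP fusions, mass spectrometry,
# literature/AmiGO, Swiss-Prot annotation, sequence-based predictors) and summarise
# the supporting evidence per subcellular location.
from collections import defaultdict

# (protein AGI code, location, evidence source) - placeholder records
records = [
    ("AT1G01090", "mitochondrion", "MS/MS"),
    ("AT1G01090", "mitochondrion", "TargetP prediction"),
    ("AT1G01090", "plastid", "GFP fusion"),
]

evidence = defaultdict(lambda: defaultdict(list))
for protein, location, source in records:
    evidence[protein][location].append(source)

for protein, locations in evidence.items():
    for location, sources in locations.items():
        print(protein, location, f"({len(sources)} lines of evidence: {', '.join(sources)})")
```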

    Identification of novel DNA repair proteins via primary sequence, secondary structure, and homology

    Background: DNA repair is the general term for the collection of critical mechanisms that repair many forms of DNA damage, such as methylation or ionizing radiation. DNA repair has mainly been studied in experimental and clinical situations, and relatively few information-based approaches to extracting new DNA repair knowledge exist. As a first step, automatic detection of DNA repair proteins in genomes via informatics techniques is desirable; however, there are many forms of DNA repair, and it is not a straightforward process to identify and classify repair proteins with a single optimal method. We perform a study of the ability of homology and machine-learning-based methods to identify and classify DNA repair proteins, and scan vertebrate genomes for the presence of novel repair proteins. Combinations of primary sequence polypeptide frequency, secondary structure, and homology information are used as feature information for input to a Support Vector Machine (SVM).

    Results: We find that SVM techniques are capable of identifying portions of DNA repair protein datasets without admitting false positives; at low levels of false-positive tolerance, homology can also identify and classify proteins with good performance. Secondary structure information provides improved performance compared to using primary structure alone. Furthermore, we observe that machine learning methods incorporating homology information perform best when the data are filtered by some clustering technique. Applying these methodologies to the scanning of multiple vertebrate genomes confirms a positive correlation between the size of a genome and the number of DNA repair protein transcripts it is likely to contain, and simultaneously suggests that all organisms have a non-zero minimum number of repair genes. In addition, the scan results cluster several organisms' repair abilities in an evolutionarily consistent fashion. The analysis also identifies several functionally unconfirmed proteins that are highly likely to be involved in the repair process. A new web service, INTREPED, has been made available for the immediate search and annotation of DNA repair proteins in newly sequenced genomes.

    Conclusion: Despite complexity due to a multitude of repair pathways, combinations of sequence, structure, and homology with Support Vector Machines offer good methods, in addition to existing homology searches, for DNA repair protein identification and functional annotation. Most importantly, this study has uncovered relationships between the size of a genome and the genome's available repair repertoire, and offers a number of new predictions as well as a prediction service, both of which reduce the search time and cost for novel repair genes and proteins.
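    A minimal sketch of the general classification setup follows: an SVM trained on amino-acid composition ("primary sequence polypeptide frequency") features. The toy sequences, labels, and kernel choice are placeholders, not the paper's datasets or tuned model.

```python
# Minimal sketch of the general approach (not the paper's exact features or kernel):
# classify proteins as DNA-repair vs. other using amino-acid composition as the
# feature vector for an SVM. Sequences and labels below are placeholders.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino-acid frequency vector for one protein sequence."""
    counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

train_seqs = ["MKTAYIAKQR", "GDVEKGKKIF", "MLSRAVCGTS", "PKKKRKVEDA"]  # toy sequences
train_labels = [1, 1, 0, 0]          # 1 = DNA repair (hypothetical), 0 = other

X = np.array([composition(s) for s in train_seqs])
clf = SVC(kernel="rbf", C=1.0).fit(X, train_labels)

print(clf.predict([composition("MKQLEDKVEE")]))   # predicted class for a new sequence
```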

    An update on the strategies in multicomponent activity monitoring within the phytopharmaceutical field

    Background: To date, modern drug research has focused on the discovery and synthesis of single active substances. However, multicomponent preparations are gaining increasing importance in the phytopharmaceutical field by demonstrating beneficial properties with respect to efficacy and toxicity.

    Discussion: In contrast to single-drug combinations, a botanical multicomponent therapeutic possesses a complex repertoire of chemicals that belong to a variety of substance classes. This may explain the frequently observed pleiotropic bioactivity spectra of these preparations, and may also suggest that they offer novel therapeutic opportunities. Interestingly, considerable bioactivity is exhibited not only by remedies that contain high doses of phytochemicals with prominent pharmaceutical efficacy, but also by preparations that lack a sole active principle. Although each individual substance within these multicomponent preparations is present at a low molar fraction, therapeutic activity is established through the potentiation of their effects via combined and simultaneous action on multiple molecular targets. While beneficial properties may emerge from such a broad range of perturbations of the cellular machinery, validation and/or prediction of their activity profiles is accompanied by a variety of difficulties in generic risk-benefit assessments. Thus, it is recommended that a comprehensive strategy be implemented to cover the entirety of multicomponent-multitarget effects, so as to address the limitations of conventional approaches.

    Summary: An integration of standard toxicological methods with selected pathway-focused bioassays and unbiased data acquisition strategies (such as gene expression analysis) would be advantageous in building an interaction network model that considers all of the effects, whether intended or adverse reactions.

    Performance and characterization of the SPT-3G digital frequency-domain multiplexed readout system using an improved noise and crosstalk model

    The third-generation South Pole Telescope camera (SPT-3G) improves upon its predecessor (SPTpol) with an order-of-magnitude increase in the number of detectors on the focal plane. The technology used to read out and control these detectors, digital frequency-domain multiplexing (DfMUX), is conceptually the same as that used for SPTpol, but extended to accommodate more detectors. A nearly 5× expansion in the readout operating bandwidth has enabled the use of this large focal plane, and SPT-3G performance meets the forecasting targets relevant to its science objectives. However, the electrical dynamics of the higher-bandwidth readout differ from predictions based on models of the SPTpol system, due to the higher frequencies used and parasitic impedances associated with the new cryogenic electronic architecture. To address this, we present an updated derivation for electrical crosstalk in higher-bandwidth DfMUX systems and identify two previously uncharacterized contributions to readout noise, which become dominant at high bias frequency. The updated crosstalk and noise models successfully describe the measured crosstalk and readout noise performance of SPT-3G. These results also suggest specific changes to warm electronics component values, wire-harness properties, and SQUID parameters to improve the readout system for future experiments using DfMUX, such as the LiteBIRD space telescope.
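    As a toy illustration of why crosstalk grows with bias frequency spacing and circuit non-idealities, the sketch below models each readout channel as a series RLC leg tuned to its bias frequency and computes the off-resonance current driven in neighbouring legs by one channel's carrier. Component values are illustrative, and the model ignores the parasitic impedances and SQUID dynamics treated in the paper.

```python
# Toy sketch of one crosstalk mechanism in frequency-domain multiplexing (not the
# paper's full model): each bolometer sits in series with an LC resonator tuned to
# its bias frequency, and a carrier applied at one channel's resonance drives a small
# off-resonance current through its neighbours. Component values are illustrative.
import numpy as np

R = 1.0                                    # bolometer operating resistance (ohm)
L = 60e-6                                  # resonator inductance (H)
f_bias = np.array([1.6e6, 1.7e6, 1.8e6])   # channel bias frequencies (Hz)
C = 1.0 / ((2 * np.pi * f_bias) ** 2 * L)  # capacitors tuned to each bias frequency

def leg_impedance(f, L, C, R):
    w = 2 * np.pi * f
    return R + 1j * (w * L - 1.0 / (w * C))

# Drive at channel 1's resonance and compare on-resonance vs. neighbour currents.
f_drive = f_bias[1]
currents = np.abs(1.0 / leg_impedance(f_drive, L, C, R))   # per unit carrier voltage
crosstalk = currents / currents[1]
print(crosstalk)   # neighbour leakage relative to the addressed channel
```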